
    How Does Chinese Media Write About AI?

    The country’s media outlets have published dozens of investigations into how artificial intelligence is affecting people’s lives and work.

    Artificial intelligence is in everything these days, from ride-hailing apps to short videos — a meteoric rise that is raising concerns about big tech and the pervasive influence of algorithms in our daily lives. Last March, European legislators adopted the Artificial Intelligence Act, demanding greater transparency from AI developers. In China and the United States, regulators have tried to strike a balance between accelerating tech innovation and reining in its power. But public scrutiny has only intensified in recent years with the rise of generative AI, not to mention algorithm-powered social media apps.

    These developments have raised serious questions about media outlets’ ability to hold algorithms and their developers accountable. In theory, good tech reporting should not only raise public awareness of important issues but also keep big tech firms honest. So it’s worth asking whether news outlets are informing the public about the actual and potential risks associated with the use of AI, how these investigations are carried out, and if they have been effective.

    To answer these questions in the Chinese context, my research partners and I collected 23 of the most influential investigative reports on AI published between 2019 and 2023 on WeChat, China’s leading social media app and the place where most Chinese people get their news. Unsurprisingly, these reports were generally skeptical of, if not hostile to, AI. But a closer look at their perspectives and focuses sheds light on how the rise of AI has opened a space for critical journalism in China.

    We found three major critiques that Chinese investigative journalists offered of AI and algorithmic systems. The first was the invasion of mechanical logic into domains traditionally considered quintessentially human, such as dating. A 2019 report from Vista View magazine, for example, detailed how Chinese online daters were trained by matchmaking apps to click the “right” profiles to optimize their recommendation results, rather than follow their own feelings.

    The second major critique of AI involved the way it objectifies everyday life. Multiple investigations examined how app users’ demographics and behaviors were recorded, encoded, and fed into an algorithmic black box. This process helped build addictive algorithms that their engineers could barely understand, even as they rolled them out in search of profit.

    Finally, Chinese investigative journalists have paid special attention to AI’s reduction of human agency, especially in the workplace. A common metaphor for this, “being trapped in the system,” was coined in a People Magazine investigation of food delivery riders who had to risk their safety and even their lives to meet unreasonable delivery timelines — or else pay fines to the delivery service platforms. Another report, “My Boss Isn’t Human,” investigated the rollout of surveillance and automatic decision-making systems in factories and offices for the purpose of maximizing work efficiency. In both stories, workers had lost control of their time to an opaque and unaccountable system.

    These critiques are not unique to China, of course. Chinese journalists share a common perspective with their Western counterparts as they warn of algorithms’ unaccountability. But we found that Chinese investigative reporters put greater emphasis on everyday life, such as a young professional’s dating experience or a gig worker’s hours. By contrast, investigative reporters in the United States, another AI power, focus more on uncovering AI’s discriminatory effects. For example, ProPublica’s 2016 report “Machine Bias” revealed how software used in U.S. courts to predict the likelihood of reoffending was biased against Black defendants. American and European media also tend to be more interested in monitoring regulatory moves and major lawsuits against big tech, while Chinese media seldom follow these developments closely before an official decision is made.

    Some of these differences can be attributed to the different roles that media play in China and elsewhere, but it is equally important to recognize the influence of each country’s social context in framing critical journalism. Chinese internet watchers used to joke that “different soils nourish different AIs,” a nod to the way the training and deployment of large language models are conditioned by the text available online in a given language and by a country’s regulatory framework. This is true for AI journalism, too: The “new” issues raised by China’s AI push are often variations of existing problems, such as price discrimination and overwork. In some cases, these investigations didn’t even start with AI. For example, the reporter who wrote the People article later recalled that her initial focus was the high accident rate among food delivery riders, but her investigation then led her to the algorithmic systems that were gamifying labor processes at the workers’ expense.

    The fact that technology is so embedded in contemporary society has allowed Chinese critical journalism to bring powerful tech companies under social scrutiny and pave the way for policy changes. After People’s report went viral, two major companies pledged to adjust their apps and improve riders’ working conditions. More recently, mounting media pressure has helped curb the proliferation of facial recognition systems used at hotel check-ins. It should be noted, however, that it remains a challenge to audit the efficacy of improvement measures taken by tech companies, and investigations are more likely to bring about concrete changes when they resonate with broader agendas set by regulatory bodies.

    Even if a quick response is not guaranteed, AI-critical journalism can still make an impact in a deeper sense by shaping people’s understanding of algorithms. Two influential reports detailed how young Chinese are “fighting against algorithms” to take back control of their lives — a process that involves clearing their browser cookies, lingering on uninteresting content to rebalance their recommendations, or simply adhering to the principles of “not logging in, not liking, not following, and not commenting.” By informing people of how their data is being collected and processed, it is possible that Chinese investigative journalism can enhance the public’s tech literacy, allowing for the more responsible use of AI and less algorithmic manipulation.

    Joanne Kuai, a Ph.D. candidate at Karlstad University, made an equal contribution to this article.

    Editor: Cai Yineng; portrait artist: Zhou Zhen.